cross entropy

In information theory, the cross entropy between two probability distributions p and q over the same underlying set of events measures the average number of bits needed to identify an event drawn from the set, if the coding scheme used is optimized for the "unnatural" probability distribution q rather than for the "true" distribution p.
The cross entropy for the distributions p and q over a given set is defined as follows:
:H(p, q) = \operatorname{E}_p[-\log q] = H(p) + D_{\mathrm{KL}}(p \| q),
where H(p) is the entropy of p, and D_{\mathrm{KL}}(p \| q) is the Kullback–Leibler divergence of q from p (also known as the ''relative entropy'' of ''p'' with respect to ''q''; note the reversal of emphasis).
For discrete p and q this means
:H(p, q) = -\sum_x p(x)\, \log q(x). \!
The situation for continuous distributions is analogous:
:H(p, q) = -\int_X p(x)\, \log q(x)\, dx. \!
NB: The notation H(p,q) is also used for a different concept, the joint entropy of p and q.
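As a concrete illustration of the discrete formula and of the identity H(p, q) = H(p) + D_{\mathrm{KL}}(p \| q), the following Python sketch computes cross entropy in bits; the distributions p and q are arbitrary example values chosen for illustration, not taken from the article.
<syntaxhighlight lang="python">
import numpy as np

def cross_entropy(p, q):
    """Discrete cross entropy H(p, q) = -sum_x p(x) log2 q(x), in bits."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0                      # terms with p(x) = 0 contribute nothing
    return float(-np.sum(p[mask] * np.log2(q[mask])))

def entropy(p):
    """Shannon entropy H(p) = H(p, p), in bits."""
    return cross_entropy(p, p)

def kl_divergence(p, q):
    """Kullback-Leibler divergence D_KL(p || q) = H(p, q) - H(p), in bits."""
    return cross_entropy(p, q) - entropy(p)

# Illustrative example distributions over three events.
p = [0.5, 0.25, 0.25]   # the "true" distribution
q = [0.25, 0.25, 0.5]   # the "unnatural" distribution the code is optimized for

print(cross_entropy(p, q))                  # 1.75 bits
print(entropy(p) + kl_divergence(p, q))     # 1.75 bits, confirming the identity
</syntaxhighlight>
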
== Motivation ==
In information theory, the Kraft–McMillan theorem establishes that any uniquely decodable coding scheme for coding a message to identify one value x_i out of a set of possibilities X can be seen as representing an implicit probability distribution q(x_i) = 2^{-l_i} over X, where l_i is the length of the code for x_i in bits. Cross entropy can therefore be interpreted as the expected message length per datum when a wrong distribution q is assumed while the data actually follow a distribution p; that is why the expectation is taken over the probability distribution p and not q.
:H(p, q) = \operatorname{E}_p[l_i] = \operatorname{E}_p\left[\log \frac{1}{q(x_i)}\right]
:H(p, q) = \sum_{x_i} p(x_i)\, \log \frac{1}{q(x_i)} \!
:H(p, q) = -\sum_x p(x)\, \log q(x). \!
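A minimal numerical sketch of this interpretation (the distributions are again illustrative values, and the code lengths l_i = -log2 q(x_i) are idealized, so they need not be integers): the expected per-symbol message length under the true distribution p, when the code is designed for q, equals H(p, q).
<syntaxhighlight lang="python">
import numpy as np

# Illustrative distributions (example values only).
p = np.array([0.5, 0.25, 0.25])   # distribution the data actually follow
q = np.array([0.25, 0.25, 0.5])   # distribution the code was designed for

# Idealized code lengths implied by q: l_i = -log2 q(x_i), i.e. q(x_i) = 2**(-l_i).
lengths = -np.log2(q)

# Expected message length per datum when events are drawn from p.
expected_length = np.sum(p * lengths)

# This equals the cross entropy H(p, q).
cross_entropy = -np.sum(p * np.log2(q))
print(expected_length, cross_entropy)       # both 1.75 bits
</syntaxhighlight>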
